Transparency and reproducibility


One researcher's mission to encourage reproducibility in machine learning

#artificialintelligence

On February 14, a researcher who was frustrated with reproducing the results of a machine learning research paper opened a Reddit account under the username ContributionSecure14 and posted to the r/MachineLearning subreddit: "I just spent a week implementing a paper as a baseline and failed to reproduce the results. I realized today after googling for a bit that a few others were also unable to reproduce the results. Is there a list of such papers? It will save people a lot of time and effort." The post struck a nerve with other users on r/MachineLearning, the largest Reddit community for machine learning.


Furious AI researcher creates a list of non-reproducible machine learning papers

#artificialintelligence

On February 14, a researcher who was frustrated with reproducing the results of a machine learning research paper opened a Reddit account under the username ContributionSecure14 and posted to the r/MachineLearning subreddit: "I just spent a week implementing a paper as a baseline and failed to reproduce the results. I realized today after googling for a bit that a few others were also unable to reproduce the results. Is there a list of such papers? It will save people a lot of time and effort." The post struck a nerve with other users on r/MachineLearning, the largest Reddit community for machine learning.


Scientists voice concerns, call for transparency and reproducibility in AI research

#artificialintelligence

In an article published in Nature on October 14, 2020, scientists at Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard School of Public Health, Massachusetts Institute of Technology, and others, challenge scientific journals to hold computational researchers to higher standards of transparency, and call for their colleagues to share their code, models and computational environments in publications. "Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from," says Dr. Benjamin Haibe-Kains, Senior Scientist at Princess Margaret Cancer Centre and first author of the article. "But in computational research, it's not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress." The authors voiced their concern about the lack of transparency and reproducibility in AI research after a Google Health study by McKinney et al., published in a prominent scientific journal in January 2020, claimed an artificial intelligence (AI) system could outperform human radiologists in both robustness and speed for breast cancer screening.

  ai research, haibe-kains, transparency and reproducibility, (8 more...)
  Country:
  Genre: Research Report (0.95)
  Industry: Health & Medicine > Therapeutic Area > Oncology (1.00)

Researchers call for transparency and reproducibility in artificial intelligence research

#artificialintelligence

International scientists are challenging their colleagues to make Artificial Intelligence (AI) research more transparent and reproducible to accelerate the impact of their findings for cancer patients. In an article published in Nature on October 14, 2020, scientists at Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard School of Public Health, Massachusetts Institute of Technology, and others, challenge scientific journals to hold computational researchers to higher standards of transparency, and call for their colleagues to share their code, models and computational environments in publications. "Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from. But in computational research, it's not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress."

  Country:
  Genre: Research Report (0.54)
  Industry: Health & Medicine > Therapeutic Area > Oncology (1.00)

Researchers take issue with study evaluating an AI system for breast cancer screening – School of Public Health

#artificialintelligence

In a new perspective piece "Transparency and reproducibility in artificial intelligence" published this week in the journal Nature, an international group of scientists including CUNY Graduate School of Public Health and Health Policy (CUNY SPH) Associate Professor Levi Waldron raised concerns about the lack of transparency in publication of artificial intelligence algorithms for health applications. The authors raise concerns about a recent publication in which a group including Google Health reported using artificial intelligence to diagnose breast cancer from mammogram images more accurately than expert human radiologists. The authors contend that restrictive data access procedures, lack of published computer code, and unreported model parameters make it impractically difficult for any other researchers to confirm or extend this work. The piece also highlights tensions over what are appropriate measures to protect patient privacy while allowing the broader research community to contribute methodology and to correct potential errors that could set back progress to the detriment of other patients. "This back-and-forth is one high-profile example of the current state of struggles over who controls data that has played out for decades in the biomedical sciences and other fields," says Professor Waldron.


Transparency and reproducibility in artificial intelligence

#artificialintelligence

A.H. is a shareholder of and receives consulting fees from Altis Labs. H.J.W.L.A. is a shareholder of and receives consulting fees from Onc.AI. B.H.K. is a scientific advisor for Altis Labs. C.M. holds an equity position in Bridge7Oncology and receives royalties from RaySearch Laboratories.

